
    Global exponential synchronization of quaternion-valued memristive neural networks with time delays

    This paper extends memristive neural networks (MNNs) to the quaternion field, establishing a new class of networks named quaternion-valued memristive neural networks (QVMNNs), and investigates drive-response global synchronization for this type of network. Two cases are considered: one under the conventional differential inclusion assumption and one without it. Criteria for global synchronization in the two cases are derived by appropriately choosing Lyapunov functionals and applying several inequality techniques. Finally, simulation examples are presented to demonstrate the correctness of the derived results.
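    The drive-response scheme described above can be illustrated numerically. The following is a minimal sketch, not the paper's model: a single quaternion-valued neuron whose connection weight switches with the state magnitude (a crude stand-in for memristive switching), driven into synchronization by a linear feedback controller. The function names, gains, and parameter values are all illustrative assumptions.

```python
# A minimal numerical sketch, not the paper's model: one quaternion-valued
# neuron with a state-dependent ("memristive") connection weight, and a
# response copy synchronized by a linear feedback controller.  Quaternions
# are stored as length-4 arrays (w, x, y, z); all constants are assumptions.
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def act(q):
    """Component-wise bounded activation."""
    return np.tanh(q)

def weight(q):
    """Memristive weight: switches with the state magnitude."""
    if np.linalg.norm(q) < 1.0:
        return np.array([0.8, 0.1, -0.2, 0.05])
    return np.array([0.5, 0.05, -0.1, 0.02])

a, k, dt = 2.0, 5.0, 1e-3               # self-decay, control gain, step size
x = np.array([0.5, -0.3, 0.2, 0.1])     # drive state
y = np.array([-1.0, 0.8, -0.5, 0.4])    # response state

for step in range(5001):
    dx = -a*x + qmul(weight(x), act(x))
    u = -k*(y - x)                       # linear feedback controller
    dy = -a*y + qmul(weight(y), act(y)) + u
    x, y = x + dt*dx, y + dt*dy
    if step % 1000 == 0:
        print(f"t = {step*dt:.1f}  synchronization error = {np.linalg.norm(y - x):.2e}")
```

    With the control gain dominating the decay rate plus the Lipschitz effect of the bounded activation, the printed error norm shrinks toward zero, mirroring the global synchronization behavior the abstract establishes analytically.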

    Paradoxes and resolutions for semiparametric fusion of individual and summary data

    Suppose we have individual-level data from an internal study and various types of summary statistics from relevant external studies. External summary statistics have been used as constraints on the internal data distribution, which promises to improve statistical inference from the internal data; however, the additional use of external summary data can lead to paradoxical results: efficiency loss may occur if the uncertainty of the summary statistics is not negligible, and large estimation bias can emerge even if the bias of the external summary statistics is small. We investigate these paradoxical results in a semiparametric framework. We establish the semiparametric efficiency bound for estimating a general functional of the internal data distribution, which is shown to be no larger than the bound obtained using internal data alone. We propose a data-fused efficient estimator that achieves this bound, resolving the efficiency paradox. In addition, we propose a debiased estimator that attains selection consistency by employing an adaptive lasso penalty, so that it achieves the same asymptotic distribution as the oracle estimator that uses only unbiased summary statistics; this resolves the bias paradox. Simulations and an application to a Helicobacter pylori infection dataset illustrate the proposed methods.
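    The bias-paradox resolution via an adaptive lasso can be sketched in a toy setting. The snippet below is our own construction under strong simplifications (a scalar mean functional and a single external summary mean with known standard error), not the paper's estimator: the penalty either shrinks the external bias term to zero, so the summary is fully used, or leaves it active, so the biased summary is discounted.

```python
# A toy sketch of the adaptive-lasso debiasing idea, our construction under
# strong simplifications (scalar mean, one external summary), not the paper's
# estimator.  The bias b of the external summary is penalized so that a
# near-unbiased summary is fully used (b -> 0) while a badly biased one is
# absorbed into b and effectively discounted.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def fused_mean(x, m_ext, se_ext, lam=1.0, n_iter=100):
    n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
    b = m_ext - xbar                    # pilot estimate of the external bias
    w = 1.0 / max(abs(b), 1e-8)         # adaptive-lasso weight
    mu = xbar
    for _ in range(n_iter):             # coordinate descent on (mu, b)
        # mu-step: precision-weighted average of the internal data and the
        # bias-corrected external summary
        mu = ((n / s2) * xbar + (m_ext - b) / se_ext**2) / (n / s2 + 1.0 / se_ext**2)
        # b-step: soft-threshold the residual of the external summary
        b = soft_threshold(m_ext - mu, lam * w * se_ext**2)
    return mu, b

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=200)              # internal data, true mean 1.0
print(fused_mean(x, m_ext=1.02, se_ext=0.05))   # near-unbiased summary: b -> 0
print(fused_mean(x, m_ext=2.00, se_ext=0.05))   # badly biased summary: b stays large
```

    In the first call the bias term is shrunk to zero and the estimate is a precision-weighted fusion that is tighter than the internal mean alone; in the second, the bias term absorbs the discrepancy and the estimate falls back to roughly the internal mean, which is the qualitative content of the selection-consistency property.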

    Cellulase Recycling after High-Solids Simultaneous Saccharification and Fermentation of Combined Pretreated Corncob

    Despite the advantageous prospects of second-generation bioethanol, its commercialization must overcome the primary cost impediment of enzyme consumption. To address this problem, this work achieves high-concentration ethanol fermentation and multi-round cellulase recycling through process integration. The optimal time and temperature of the re-adsorption process were determined by monitoring the adsorption kinetics of the cellulases. Both glucose and cellobiose inhibited cellulase adsorption. After 96 h of ethanol fermentation, 40% of the initial cellulase remained in the broth, from which 62.5% could be recovered and reused by re-adsorption onto fresh substrate for 90 min. Under optimal conditions (pH 5.0, dry matter loading of 15 wt%, cellulase loading of 45 FPU/g glucan), two cycles of fermentation and re-adsorption yielded a two-fold increase in ethanol output and reduced enzyme costs by over 50%. The ethanol concentration in each cycle exceeded 40 g/L.
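    As a back-of-the-envelope check of the recovery figures quoted above (our arithmetic, not the paper's mass balance): 40% of the loaded cellulase remains in the broth and 62.5% of that is recovered, i.e. 0.40 x 0.625 = 25% of the initial loading is returned to each new cycle.

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the
# abstract; the variable names and the top-up interpretation are our
# illustrative assumptions, not the paper's mass balance.
initial_loading = 45.0                         # FPU per g glucan
left_in_broth   = 0.40 * initial_loading       # 18.0 FPU/g remains after 96 h
recovered       = 0.625 * left_in_broth        # 11.25 FPU/g, 25% of the initial load
topup_needed    = initial_loading - recovered  # fresh enzyme for the next cycle
print(f"recovered per cycle : {recovered:.2f} FPU/g ({recovered / initial_loading:.0%})")
print(f"fresh top-up needed : {topup_needed:.2f} FPU/g")
```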

    Does Momentum Change the Implicit Regularization on Separable Data?

    The momentum acceleration technique is widely adopted in many optimization algorithms. However, there is no theoretical answer to how momentum affects the generalization performance of these algorithms. This paper studies the problem by analyzing the implicit regularization of momentum-based optimization. We prove that, for linear classification on separable data with an exponential-tailed loss, gradient descent with momentum (GDM) converges to the L2 max-margin solution, the same solution found by vanilla gradient descent. This means that gradient descent with momentum acceleration still converges to a low-complexity model, which guarantees its generalization. We then analyze the stochastic and adaptive variants of GDM (i.e., SGDM and deterministic Adam) and show that they also converge to the L2 max-margin solution. Technically, to overcome the difficulty of error accumulation when analyzing momentum, we construct new potential functions to bound the gap between the model parameter and the max-margin solution. Numerical experiments are conducted and support our theoretical results.
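    The stated result can be observed numerically. The sketch below is illustrative, not the paper's experiment: heavy-ball momentum on the exponential loss over a hand-built separable dataset whose L2 max-margin direction is (1, 0); the step size, momentum coefficient, and dataset are our assumptions.

```python
# An illustrative sketch, not the paper's experiment: heavy-ball momentum on
# the exponential loss over a hand-built separable dataset whose L2 max-margin
# direction is (1, 0).  Step size, momentum coefficient, and data are assumed.
import numpy as np

X = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 2.0], [-1.0, -2.0]])
y = np.array([1.0, -1.0, 1.0, -1.0])

def grad(w):
    """Gradient of the exponential loss sum_i exp(-y_i w.x_i)."""
    m = y * (X @ w)                             # per-sample margins
    return -(X * (y * np.exp(-m))[:, None]).sum(axis=0)

w, v = np.zeros(2), np.zeros(2)
lr, beta = 0.1, 0.9                             # step size and momentum
for step in range(1, 200001):
    v = beta * v + grad(w)                      # heavy-ball momentum buffer
    w = w - lr * v
    if step % 50000 == 0:
        # The normalized iterate drifts toward (1, 0), slowly, consistent
        # with the logarithmic rates known for this setting.
        print(step, w / np.linalg.norm(w))
```

    Because the iterate norm grows only logarithmically on separable data, the direction converges slowly; the same run with beta = 0 (vanilla gradient descent) heads toward the same max-margin direction, matching the abstract's claim that momentum does not change the implicit bias.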